How to explain a machine learning model: HbA1c classification example


Abstract

Aim: Machine learning (ML) tools have various applications in healthcare. However, the implementation of developed models is still limited because of several challenges. One of the most important problems is the lack of explainability of ML models. Explainability refers to the capacity to reveal the reasoning and logic behind decisions made by AI systems, making it straightforward for human users to understand how the system arrived at a specific outcome. This study aimed to compare the performance of different model-agnostic explanation methods using two ML models created for HbA1c classification.

Material and Method: The H2O AutoML engine was used for model development (gradient boosting machine (GBM) and distributed random forest (DRF)) on 3,036 records from the NHANES open data set. Both global and local model-agnostic explanation methods, including performance metrics, feature importance analysis, partial dependence, breakdown, and Shapley additive explanation plots, were utilized.

Results: While both GBM and DRF showed similar performance metrics, such as mean per class error and area under the receiver operating characteristic curve, they had slightly different variable importance rankings. Local explanation methods also showed different contributions of the features.

Conclusion: This study evaluated the significance of explainable ML techniques for comprehending complicated models and their role in incorporating such models into clinical practice. The results indicate that although current explanation methods have limitations, particularly for clinical use, they offer a glimpse into evaluating the model and into how it can be enhanced or improved.
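The sketch below illustrates, in H2O's Python API, the kind of workflow the abstract describes: training GBM and DRF models with AutoML and then generating global and local explanations. It is a minimal sketch under assumed details, not the authors' code; the file name nhanes_hba1c.csv, the target column hba1c_class, and the 80/20 split are hypothetical placeholders.

# Minimal sketch of the described workflow using the H2O AutoML Python API.
# File name, column names, and split ratio are illustrative assumptions.
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical NHANES extract with an HbA1c class label as the target.
df = h2o.import_file("nhanes_hba1c.csv")
df["hba1c_class"] = df["hba1c_class"].asfactor()
train, test = df.split_frame(ratios=[0.8], seed=42)

# Restrict AutoML to the two algorithm families compared in the study.
aml = H2OAutoML(include_algos=["GBM", "DRF"], max_models=10, seed=42)
aml.train(y="hba1c_class", training_frame=train)

# Global explanations: leaderboard metrics, variable importance,
# partial dependence, and SHAP summary plots.
aml.explain(test)

# Local explanation for a single record: per-feature contributions.
aml.explain_row(test, row_index=0)

In H2O, explain() bundles the global outputs (performance metrics, variable importance, partial dependence, SHAP summary), while explain_row() gives per-record feature contributions; breakdown-style plots, also mentioned in the abstract, typically come from separate packages such as DALEX.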

Related articles

How to Explain Individual Classification Decisions

David Baehrens and Timon Schroeter (Technische Universität Berlin, Franklinstr. 28/29, FR 6-9, 10587 Berlin, Germany); Stefan Harmeling (MPI for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany); Motoaki Kawanabe (Fraunhofer Institute FIRST.IDA, Kekulestr. 7, 12489 Berlin, Germany)


Can machine learning explain human learning?

Learning Analytics (LA) has a major interest in exploring and understanding the learning process of humans and, for this purpose, benefits from both Cognitive Science, which studies how humans learn, and Machine Learning, which studies how algorithms learn from data. Usually, Machine Learning is exploited as a tool for analyzing data coming from experimental studies, but it has been recently ap...


Emotion Detection in Persian Text; A Machine Learning Model

This study aimed to develop a computational model for recognition of emotion in Persian text as a supervised machine learning problem. We considered the Plutchik emotion model as the supervised learning criterion and Support Vector Machine (SVM) as the baseline classifier. We also used the NRC lexicon and contextual features as training data and components of the model. One hundred selected texts including pol...


How could a rational analysis model explain?

Rational analysis is an influential but contested account of how probabilistic modeling can be used to construct nonmechanistic but self-standing explanatory models of the mind. In this paper, I disentangle and assess several possible explanatory contributions which could be attributed to rational analysis. Although existing models suffer from evidential problems that question their explanatory...


Applying Machine Learning to Amharic Text Classification

Even though recent years have seen an increasing trend toward applying language processing methods to languages other than English, most of the work is still done on very few, mainly European and East Asian, languages. However, there is a need for people all over the world to be able to use their own language when using computers or accessing information on the Internet. This requ...



Journal

Journal title: Journal of Medicine and Palliative Care

Year: 2023

ISSN: 2717-7505

DOI: https://doi.org/10.47582/jompac.1259507